The js-tokens npm package is a lightweight, regex-based lexer for JavaScript tokens. It can tokenize JavaScript code without the overhead of a full parser, making it useful for tasks like syntax highlighting or extracting specific tokens from code.
Tokenizing JavaScript code
This feature allows you to tokenize a string of JavaScript code into an array of token objects. Each token object describes a syntactic element of the code, such as a keyword, identifier, number, string, or operator.
const jsTokens = require('js-tokens');
const tokens = [...jsTokens('var x = 42;')];
console.log(tokens);
Acorn is a full JavaScript parser that can parse source code into an abstract syntax tree (AST). While js-tokens only tokenizes the code, Acorn can parse it and provide more detailed information about the structure of the code.
Esprima is another JavaScript parser that can convert code into an AST. It offers similar functionality to Acorn but has a different API and different extension points. Compared to js-tokens, Esprima provides a more comprehensive analysis of the code.
Babylon is the parser used by Babel, and it's capable of handling modern JavaScript features. It's more feature-rich than js-tokens, offering AST generation and the ability to handle experimental syntax through plugins.
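As a rough illustration of the difference in output, here is a minimal sketch (assuming acorn is installed alongside js-tokens, and using the regex API documented below):
var jsTokens = require("js-tokens")
var acorn = require("acorn")

var code = "var x = 42;"

// js-tokens: a flat list of token strings, no structure.
console.log(code.match(jsTokens))
// ["var", " ", "x", " ", "=", " ", "42", ";"]

// Acorn: a full AST describing the structure of the code.
console.log(acorn.parse(code, { ecmaVersion: 2015 }).body[0].type)
// "VariableDeclaration"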
A regex that tokenizes JavaScript.
var jsTokens = require("js-tokens")
var jsString = "var foo=opts.foo;\n..."
jsString.match(jsTokens)
// ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...]
npm install js-tokens
var jsTokens = require("js-tokens")
jsTokens
A regex with the g flag that matches JavaScript tokens.
The regex always matches, even invalid JavaScript and the empty string.
The next match is always directly after the previous.
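Because the regex always matches and each match starts directly after the previous one, every character of the input ends up in exactly one token; a quick sketch:
var jsTokens = require("js-tokens")

var code = "var foo = /* comment */ 'bar';"
var tokens = code.match(jsTokens)

// Joining the matches reproduces the original source, character for character.
console.log(tokens.join("") === code) // true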
var token = jsTokens.matchToToken(match)
Takes a match returned by jsTokens.exec(string), and returns a {type: String, value: String} object. The available types are: string, comment, regex, number, name, punctuator, whitespace and invalid.
Multi-line comments and strings also have a closed property indicating if the token was closed or not (see below).
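A minimal sketch of driving the regex with exec and classifying each match with matchToToken:
var jsTokens = require("js-tokens")

var code = "var foo = 'bar' // trailing comment"
var match
while ((match = jsTokens.exec(code)) !== null) {
  var token = jsTokens.matchToToken(match)
  console.log(token.type, JSON.stringify(token.value))
}
// Logs each token's type and value: name "var", whitespace " ", name "foo", and so on.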
Comments and strings both come in several flavors. To distinguish them, check if the token starts with //, /*, ', " or `.
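For example, a small hypothetical helper that tells the flavors apart by looking at the opening characters of the token value:
// Hypothetical helper: classifies comment and string tokens by their opening characters.
function flavor(token) {
  var value = token.value
  if (token.type === "comment") {
    return value.slice(0, 2) === "//" ? "single-line comment" : "multi-line comment"
  }
  if (token.type === "string") {
    if (value[0] === "'") return "single-quoted string"
    if (value[0] === '"') return "double-quoted string"
    return "template string" // starts with `
  }
  return token.type
}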
Names are ECMAScript IdentifierNames; that is, they include both identifiers and keywords. You may use is-keyword-js to tell them apart.
Whitespace includes both line terminators and other whitespace.
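Since keywords and identifiers both come through as name tokens, telling them apart needs a separate check, either via is-keyword-js or a hand-rolled set; a rough sketch of the latter (the keyword list here is deliberately partial, for illustration only):
// Illustrative only: a partial set of reserved words, not the full ECMAScript list.
var keywords = new Set(["var", "function", "return", "if", "else", "for", "while", "new", "typeof"])

function isKeyword(token) {
  return token.type === "name" && keywords.has(token.value)
}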
For example usage, please see this gist.
The intention is to always support the latest stable ECMAScript version.
If adding support for a newer version requires changes, a new version with a major version bump will be released.
Currently, ECMAScript 2015 is supported.
Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input.
Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input.
Unterminated regex literals are likely matched as division and whatever is inside the regex.
Invalid ASCII characters have their own capturing group.
Invalid non-ASCII characters are treated as names, to simplify the matching of names (except Unicode spaces, which are treated as whitespace).
Regex literals may contain invalid regex syntax; they are still matched as regex literals. They may also contain repeated regex flags, because disallowing that would complicate the tokenizer regex.
Strings may contain invalid escape sequences.
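A short sketch showing the closed property on an unterminated string and an unterminated multi-line comment:
var jsTokens = require("js-tokens")

function firstToken(code) {
  jsTokens.lastIndex = 0 // reset the shared regex state between inputs
  return jsTokens.matchToToken(jsTokens.exec(code))
}

console.log(firstToken("'unterminated string"))
// { type: "string", value: "'unterminated string", closed: false }

console.log(firstToken("/* unterminated comment"))
// { type: "comment", value: "/* unterminated comment", closed: false }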
Tokenizing JavaScript using regexes—in fact, one single regex—won’t be perfect. But that’s not the point either.
You may compare jsTokens with esprima by using esprima-compare.js. See npm run esprima-compare!
Template strings are matched as single tokens, from the starting ` to the ending `, including interpolations (whose tokens are not matched individually).
Matching template string interpolations requires recursive balancing of { and }, something that JavaScript regexes cannot do. Only one level of nesting is supported.
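For instance, a template string with one interpolation comes back as a single token (a rough sketch):
var jsTokens = require("js-tokens")

var code = "var s = `sum: ${1 + 2}`"
console.log(code.match(jsTokens))
// ["var", " ", "s", " ", "=", " ", "`sum: ${1 + 2}`"]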
Consider this example:
var g = 9.82
var number = bar / 2/g
var regex = / 2/g
A human can easily understand that in the number line we’re dealing with division, and in the regex line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the / characters in context. A JavaScript regex cannot. It only sees forwards.
When the jsTokens regex scans through the above, it will see the following at the end of both the number and regex rows:
/ 2/g
It is then impossible to know if that is a regex literal, or part of an expression dealing with division.
Here is a similar case:
foo /= 2/g
foo(/= 2/g)
The first line divides the foo variable by 2/g. The second line calls the foo function with the regex literal /= 2/g. Again, since jsTokens only sees forwards, it cannot tell the two cases apart.
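You can observe the ambiguity by tokenizing both kinds of lines and inspecting the reported types; a sketch (the exact classification follows the heuristics described below):
var jsTokens = require("js-tokens")

function types(code) {
  var match, result = []
  while ((match = jsTokens.exec(code)) !== null) {
    result.push(jsTokens.matchToToken(match).type)
  }
  return result
}

console.log(types("var number = bar / 2/g"))
console.log(types("var regex = / 2/g"))
// jsTokens classifies the trailing "/ 2/g" the same way in both lines,
// even though a human reads one as division and the other as a regex literal.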
There are some cases where we can tell division and regex literals apart, though.
First off, we have the simple cases where there’s only one slash in the line:
var foo = 2/g
foo /= 2
Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line.
Secondly, not every character is a valid regex flag.
var number = bar / 2/e
The above example is also correctly identified as division, because e is not a valid regex flag. I initially wanted to future-proof by allowing [a-zA-Z]* (any letter) as flags, but it is not worth it since it increases the number of ambiguous cases. So only the standard g, m, i, y and u flags are allowed. This means that the above example will be identified as division as long as you don’t rename the e variable to some permutation of gmiyu 1 to 5 characters long.
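A sketch of the flag heuristic in action; the outputs shown in the comments follow from the rules described in this section:
var jsTokens = require("js-tokens")

// "e" is not a valid regex flag, so the line is identified as division:
console.log("bar / 2/e".match(jsTokens))
// ["bar", " ", "/", " ", "2", "/", "e"]

// "g" is a valid flag, so the trailing "/ 2/g" is treated as a regex literal instead:
console.log("bar / 2/g".match(jsTokens))
// ["bar", " ", "/ 2/g"]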
Lastly, we can look forward for information.
If what looks like a regex literal is followed by an operator (such as +, *, && and ==), it is treated as division instead, since regexes are seldom used together with operators, but division could likely be part of such an expression.
Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as:
If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).
Version 1.0.3 (2016-03-27)
FAQs
Tiny JavaScript tokenizer.
The npm package js-tokens receives a total of 30,744,213 weekly downloads. As such, js-tokens is classified as a popular package.
We found that js-tokens demonstrated a healthy version release cadence and project activity because the last version was released less than a year ago. It has 1 open source maintainer collaborating on the project.